This example demonstrates how a word vector model can be trained in PyTorch using federated learning with PySyft. We distribute the text data to two workers, Bob and Alice, to whom the model is sent and on whose data it is trained. Once training is done, the trained model is sent back to its owner, who can use it to make predictions or reuse the embedding layer, which consists of the learnt word vectors. Federated learning applied to word vectors can be a great way to analyze textual data without knowing the specifics of the text and risking an invasion of privacy, for example in a real application such as understanding the internal e-mail culture of an organization. In this example we learn word embeddings by trying to predict the next word given a context of N words.
Hrishikesh Kamath - GitHub: @kamathhrishi
In [1]:
#Import modules required for PyTorch Neural Networks
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset
In [2]:
# Shakespeare Sonnet 2 as text to be learned
dataset = """When forty winters shall besiege thy brow,
And dig deep trenches in thy beauty's field,
Thy youth's proud livery so gazed on now,
Will be a totter'd weed of small worth held:
Then being asked, where all thy beauty lies,
Where all the treasure of thy lusty days;
To say, within thine own deep sunken eyes,
Were an all-eating shame, and thriftless praise.
How much more praise deserv'd thy beauty's use,
If thou couldst answer 'This fair child of mine
Shall sum my count, and make my old excuse,'
Proving his beauty by succession thine!
This were to be new made when thou art old,
And see thy blood warm when thou feel'st it cold.""".split()
In [3]:
class Arguments():
    def __init__(self):
        self.batch_size = 1
        self.test_batch_size = 1000
        self.epochs = 10
        self.lr = 0.01
        self.momentum = 0.5 # <- We currently do not support momentum
        self.no_cuda = False
        self.seed = 1
        self.log_interval = 10
        self.save_model = False
        self.context_size = 3
        self.embedding_dim = 10
In [4]:
args=Arguments()
In [5]:
#Define seed to maintain consistency
torch.manual_seed(args.seed)
Out[5]:
In [6]:
#Import PySyft library required for federated learning
import syft as sy
In [7]:
#Define Syft workers Bob and Alice for federated learning
hook = sy.TorchHook(torch) # <-- NEW: hook PyTorch, i.e. add extra functionality to support federated learning
bob = sy.VirtualWorker(hook, id="bob") # <-- NEW: define remote worker bob
alice = sy.VirtualWorker(hook, id="alice") # <-- NEW: and alice
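As a quick aside (not part of the original tutorial), here is a hedged sketch of what these virtual workers do: a tensor can be sent to Bob, leaving only a pointer locally, and retrieved again with .get(). The variable names below are illustrative only.
# Minimal sketch, assuming the standard PySyft pointer-tensor API used later in this notebook
x_check = torch.tensor([1, 2, 3])
x_ptr = x_check.send(bob)  # x_ptr is a pointer; the data now lives on bob
print(x_ptr.location)      # the worker holding the data, i.e. bob
x_back = x_ptr.get()       # bring the tensor back to the local worker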
In [8]:
# Vocabulary built from the corpus
vocab = set(dataset)
word_to_ix = {word: i for i, word in enumerate(vocab)}
ix_to_word = {word_to_ix[word]: word for word in word_to_ix}
In [9]:
class TextDataset(Dataset):

    def __init__(self, text, transform=None):
        """Arguments:
               text (list of strings): text corpus
               transform: list of transforms to be performed on the input data
        """
        self.text = text
        self.data = []
        self.targets = []
        self.transform = transform

        # Create context/target pairs
        self.create_context()

    def __len__(self):
        return len(self.data)

    def create_context(self):
        '''Separate target and context words and convert them to torch tensors'''
        context = []
        for i in range(len(self.text) - args.context_size):
            vec = []
            for j in range(0, args.context_size):
                vec.append(self.text[i + j])
            context.append([vec, self.text[i + args.context_size]])

        for words, target in context:
            tensor = torch.tensor([word_to_ix[w] for w in words], dtype=torch.long)
            self.data.append(tensor)
            self.targets.append(torch.tensor([word_to_ix[target]], dtype=torch.long))

    def __getitem__(self, idx):
        sample = self.data[idx]
        target = self.targets[idx]

        if self.transform:
            sample = self.transform(sample)

        return sample, target
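As a quick local check (not in the original notebook), a hedged sketch of what one (context, target) sample looks like before federation; local_dataset is an illustrative name only.
# Minimal sketch, assuming it is run before the data is federated
local_dataset = TextDataset(dataset)
sample_context, sample_target = local_dataset[0]
print([ix_to_word[int(i)] for i in sample_context], "->", ix_to_word[int(sample_target)])
# For this corpus the first sample should be ['When', 'forty', 'winters'] -> 'shall'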
Use a federated data loader to distribute the dataset to the workers.
In [10]:
federated_train_loader = sy.FederatedDataLoader( # <-- this is now a FederatedDataLoader
    TextDataset(dataset).federate((bob, alice)),
    batch_size=args.batch_size)
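To see the federation in action (an optional check, not in the original), a hedged sketch that peeks at the first batch: each batch is a pointer tensor whose .location attribute names the worker holding the data, which is exactly what the training loop below relies on.
# Assumption: the FederatedDataLoader yields pointer tensors with a .location attribute
first_context, first_target = next(iter(federated_train_loader))
print(first_context.location)  # the worker (bob or alice) that holds this batch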
Define the neural network in PyTorch. The network is trained to predict the next word given a context of words; the word embeddings are learnt as part of this training.
In [11]:
class NGramLanguageModeler(nn.Module):

    def __init__(self, vocab_size, embedding_dim, context_size):
        super(NGramLanguageModeler, self).__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear1 = nn.Linear(context_size * embedding_dim, 128)
        self.linear2 = nn.Linear(128, vocab_size)

    def forward(self, inputs):
        embeds = self.embeddings(inputs).view((1, -1))
        out = F.relu(self.linear1(embeds))
        out = self.linear2(out)
        log_probs = F.log_softmax(out, dim=1)
        return log_probs
In [12]:
loss_function = nn.NLLLoss()
model = NGramLanguageModeler(len(vocab),args.embedding_dim,args.context_size)
optimizer = optim.SGD(model.parameters(), lr=args.lr)
In [13]:
def train():
    model.train()
    iteration = 0
    for context, target in federated_train_loader:

        # Send the model to the worker that holds the current batch
        model.send(context.location)

        # Step 1. Prepare the inputs to be passed to the model (the words have
        # already been turned into integer indices and wrapped in tensors)
        context_idxs = context

        # Step 2. Recall that torch *accumulates* gradients. Before passing in a
        # new instance, you need to zero out the gradients from the old instance
        model.zero_grad()

        # Step 3. Run the forward pass, getting log probabilities over next words
        log_probs = model(context_idxs)

        # Step 4. Compute your loss function. (Again, torch wants the target
        # word wrapped in a tensor)
        loss = loss_function(log_probs, target[0])

        # Step 5. Do the backward pass and update the parameters
        loss.backward()
        optimizer.step()

        # Get the updated model back from the worker
        model.get()

        iteration += 1
        if iteration % 100 == 0:
            # Get the Python number from a 1-element tensor by calling .item()
            # The loss should decrease as we iterate over the training data!
            print(loss.get().item())
In [14]:
for epoch in range(0, args.epochs):
    train()
    print("EPOCH: ", epoch + 1)
In [15]:
if (args.save_model):
    torch.save(model.state_dict(), "word_vector.pt")
In [16]:
def SimilarPairs(model, vocab, inverse_vocab):
    '''For each word, find the most similar other word by cosine similarity of the embeddings'''
    cos = nn.CosineSimilarity(dim=1, eps=1e-6)
    matrix = []
    for ref_index in range(0, len(vocab)):
        max_sim = -10.0
        max_index = 0
        ref = model.embeddings(torch.LongTensor([ref_index]))
        for i in range(0, len(vocab)):
            output = cos(ref, model.embeddings(torch.LongTensor([i])))
            if output.item() > max_sim and i != ref_index:
                max_sim = output.item()
                max_index = i
        matrix.append([ix_to_word[ref_index], ix_to_word[max_index], max_sim])
    return matrix
In [17]:
similar_Pairs=SimilarPairs(model,word_to_ix,ix_to_word)
The word vectors learnt here don't capture the meanings of real words very well, since they were trained on such a small corpus.
In [18]:
# Most similar word for each of the vocabulary entries at indices 1-19
similar_Pairs[1:20]
Out[18]:
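The introduction mentioned reusing the embedding layer directly. As an optional illustration (not in the original notebook), a hedged sketch of looking up the learnt vector for a single word; "beauty" is chosen only because it appears in this corpus.
# Assumption: model.embeddings is the trained nn.Embedding layer defined above
beauty_vec = model.embeddings(torch.LongTensor([word_to_ix["beauty"]]))
print(beauty_vec.shape)  # torch.Size([1, 10]) since embedding_dim = 10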
And voilà! We are now training a real-world learning model using federated learning!
Of course, there are dozens of improvements we could think of. We would like the computation to operate in parallel on the workers, to update the central model only every n batches, to reduce the number of messages used to communicate between workers, etc.
On the security side it still has some major shortcomings. Most notably, when we call model.get() and receive the updated model from Bob or Alice, we can actually learn a lot about Bob's and Alice's training data by looking at their gradients. We could average the gradients across multiple individuals before uploading them to the central server, like we did in Part 4.
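A minimal sketch of that averaging idea, under the assumption that we kept separate per-worker copies (the names bob_model and alice_model below are hypothetical, not defined in this notebook):
import copy

def average_models(models):
    # Average the parameters of several models that share the same architecture
    averaged = copy.deepcopy(models[0])
    avg_state = averaged.state_dict()
    for name in avg_state:
        avg_state[name] = sum(m.state_dict()[name] for m in models) / len(models)
    averaged.load_state_dict(avg_state)
    return averaged

# central_model = average_models([bob_model, alice_model])  # hypothetical per-worker copies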
The above embeddings are not useful for practical purposes, as they were trained on a very small corpus. Increasing the corpus size would lead to more useful embeddings.
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
The easiest way to help our community is just by starring the repositories! This helps raise awareness of the cool tools we're building.
We made really nice tutorials to get a better understanding of what Federated and Privacy-Preserving Learning should look like and how we are building the bricks for this to happen.
The best way to keep up to date on the latest advancements is to join our community!
The best way to contribute to our community is to become a code contributor! If you want to start "one off" mini-projects, you can go to the PySyft GitHub Issues page and search for issues marked Good First Issue.
If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!